Of 2,000 scientists surveyed online, 91% said using scientific software is important for their own research, 84% said developing scientific software is important for their own research [...] and 38% spend at least one fifth of their time developing software.
“You can download our code from the URL supplied. Good luck downloading the only postdoc who can get it to run, though” (Ian Holmes)
Scientists need to easily explore new ideas :
Need for customization (avoid black boxes, try variations)
But also need for performance (as an intrinsic goal, or to enable re-use)
What is done now (when you need to go further than using existing packages, e.g. for data analysis) :
1) Prototype in R/Python/Matlab
2) Rewrite the whole or parts (if money/manpower allows) in a performant low-level language such as C/Fortran

Its goal : solve the two-language problem by providing a dynamic, high-level language with good mathematical expressivity that can be as fast as C/Fortran.
using LinearAlgebra
A = randn(4,4) # 4 x 4 matrix of normally distributed random numbers
4×4 Array{Float64,2}:
-0.70136 -0.752927 1.41348 -0.255463
0.677687 1.07635 0.657467 0.784319
0.118669 -0.0572054 -0.422288 -1.18301
-1.33039 -0.949331 0.117225 -0.321876
A[:,1] # familiar colon syntax --extract first column of A
4-element Array{Float64,1}:
-0.7013595160585986
0.6776868447495509
0.11866914773302156
-1.330392501171306
b = randn(4)
x = A\b # solve Ax=b for x using backslash operator
norm(A*x-b)
2.5589376332604516e-16
U, σ, V = svd(A); # singular value decomposition --note unicode variable name σ
Σ = diagm(0 => σ)
norm(A - U*Σ*V') # compute error of SVD factorization A = U Σ V'
2.71335595649699e-15
function loop()
    a = 0
    for i in 1:1_000_000
        a += 1
    end
end
loop (generic function with 1 method)
@time loop() # first call, includes compilation
@time loop() # now fast
0.000001 seconds (4 allocations: 160 bytes)
0.000002 seconds (4 allocations: 160 bytes)
function [] = loop()
    a=0;
    for i=1:1000000
        a=a+1;
    end
end
f = @() loop();
timeit(f) => 0.0018s
2000× slower!
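A single @time call can be noisy. A common practice (a minimal sketch, assuming the BenchmarkTools package has been installed with Pkg.add("BenchmarkTools")) is to use @btime, which runs the function many times and reports the minimum time:

```julia
using BenchmarkTools # assumed installed

function loop()
    a = 0
    for i in 1:1_000_000
        a += 1
    end
    return a # return the accumulator so the loop is not trivially eliminated
end

@btime loop() # many samples, minimum time reported; far less noisy than @time
```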
Takeaway: Julia timings cluster around the C timing, while Matlab/Python/R are orders of magnitude slower.

Takeaway: The Julia PDE code is almost as compact as Matlab/Python, almost as fast as C/Fortran.

Functions are compiled to machine code when first run. Subsequent runs are as fast as compiled C, C++, Fortran.
f(x) = 4x*(1-x) # define logistic map
@time f(0.3); # first run is slow
@time f(0.4); # second run is a thousand times faster
0.008059 seconds (17.54 k allocations: 979.150 KiB)
0.000002 seconds (5 allocations: 176 bytes)
user Julia code -> generic Julia expression -> typed Julia expression -> intermediate compiler representation (LLVM IR) -> machine code
include("macros.jl");
@code_lowered f(0.3) # show f(x) as generic Julia expression
CodeInfo(
1 ─ %1 = 4 * x
│   %2 = 1 - x
│   %3 = %1 * %2
└──      return %3
)
@code_typed f(0.3) # show f(x) as Julia expr with inferred types, based on the arg types
CodeInfo(
1 ─ %1 = (Base.mul_float)(4.0, x)::Float64
│   %2 = (Base.sub_float)(1.0, x)::Float64
│   %3 = (Base.mul_float)(%1, %2)::Float64
└──      return %3
) => Float64
The code is then converted to intermediate compiler language (LLVM), and finally compiled by LLVM to machine code
@code_llvm_nocomment f(0.3) # show f(x) in intermediate compiler language (LLVM)
define double @julia_f_36594(double) {
top:
%1 = fmul double %0, 4.000000e+00
%2 = fsub double 1.000000e+00, %0
%3 = fmul double %1, %2
ret double %3
}
@code_native_nocomment f(0.3) # show f(x) in x86-64 assembly language
.text
movabsq $139964215993512, %rax  # imm = 0x7F4BF56024A8
vmulsd  (%rax), %xmm0, %xmm1
movabsq $139964215993520, %rax  # imm = 0x7F4BF56024B0
vmovsd  (%rax), %xmm2           # xmm2 = mem[0],zero
vsubsd  %xmm0, %xmm2, %xmm0
vmulsd  %xmm0, %xmm1, %xmm0
retq
nopw    %cs:(%rax,%rax)
We can see that if we call f on an Integer, we get code specialised for integers (i64):
@code_llvm_nocomment f(0.3) # LLVM code specialised for a Float64 argument
define double @julia_f_36594(double) {
top:
%1 = fmul double %0, 4.000000e+00
%2 = fsub double 1.000000e+00, %0
%3 = fmul double %1, %2
ret double %3
}
@code_llvm_nocomment f(3) # LLVM code specialised for an Int64 argument
define i64 @julia_f_36732(i64) {
top:
%1 = shl i64 %0, 2
%2 = sub i64 1, %0
%3 = mul i64 %1, %2
ret i64 %3
}
No need to declare types, but code needs to be type-stable to be fast...
function g1(a,n)
    x = 0
    for i in 1:n
        x += a
    end
end
@code_warntype g1(2.0,10) # see the Union{Float64, Int64}
# @code_llvm_nocomment g1(2.0,10)
Body::Nothing
⋮
5 ┄─ %12 = φ (#4 => 0, #14 => %27)::Union{Float64, Int64}
│    %14 = (isa)(%12, Float64)::Bool
└─── goto #7 if not %14
6 ── %17 = (Base.add_float)(%16, a)::Float64
└─── goto #10
7 ── %19 = (isa)(%12, Int64)::Bool
└─── goto #9 if not %19
8 ── %22 = (Base.sitofp)(Float64, %21)::Float64
│    %23 = (Base.add_float)(%22, a)::Float64
9 ── (Core.throw)(ErrorException("fatal error in type inference (type bound)"))
10 ┄ %27 = φ (#6 => %17, #8 => %23)::Float64
⋮
function g2(a,n)
    x = zero(a) # get the zero of the type of a
    for i in 1:n
        x += a
    end
end
@code_warntype g2(2.0,10) # the type-stable version generates shorter, faster code
#@code_llvm_nocomment g2(2.0,10)
Body::Nothing
⋮
5 ┄─ %12 = φ (#4 => 0.0, #9 => %14)::Float64
│    %13 = φ (#4 => %9, #9 => %20)::Int64
│    %14 = (Base.add_float)(%12, a)::Float64
⋮
10 ─ return
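To see the cost of the instability directly, one can time variants of g1 and g2 that return their accumulator (a sketch; the names g1r/g2r are ours, the BenchmarkTools package is assumed installed, and exact timings will vary by machine):

```julia
using BenchmarkTools # assumed installed

function g1r(a, n)   # type-unstable: x starts as an Int, then becomes a Float64
    x = 0
    for i in 1:n
        x += a
    end
    return x
end

function g2r(a, n)   # type-stable: x has the type of a throughout
    x = zero(a)
    for i in 1:n
        x += a
    end
    return x
end

@btime g1r(2.0, 1000) # Union{Int64,Float64} checks inside the loop
@btime g2r(2.0, 1000) # plain Float64 adds, typically much faster
```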
Given $f(x) = 4\,x \, (1-x)$,
let $~~f^N(x) = f(\,f(\,...(\,f(\,f(x)))))$ be the millionth iterate of $f$, with $N=10^6$.
# define function that, given function g(x),
# returns iterated function gᴺ(x) = g(g(...(g(g(x)))))
function iterator(g, N)
    # construct gᴺ, the Nth iterate of g
    function gᴺ(x)
        for i ∈ 1:N
            x = g(x)
        end
        return x
    end
    return gᴺ
end
fᴺ = iterator(f, 10^6); # generate millionth-iterate function fᴺ(x) for f(x) = 4x(1-x)
fᴺ(0.67) # run fᴺ(x) once to compile it
0.10116885334547539
$f^N$ in C++, compiled
; cat codes/fN.cpp
#include <stdlib.h>
#include <iostream>
#include <iomanip>
#include <ctime>
using namespace std;
inline double f(double x) {
    return 4*x*(1-x);
}

int main(int argc, char* argv[]) {
    double x = argc > 1 ? atof(argv[1]) : 0.0;
    double t0 = clock();
    for (int n=0; n<1000000; ++n)
        x = f(x);
    double t1 = clock();
    cout << "t = " << (t1-t0)/CLOCKS_PER_SEC << " seconds" << endl;
    cout << setprecision(17);
    cout << "x = " << x << endl;
    return 0;
}
print("t=");
@time x = fᴺ(0.67); # run the Julia fᴺ on x = 0.67
@show x;
t=  0.004666 seconds (5 allocations: 176 bytes)
x = 0.10116885334547539
; g++ -O3 -o fN codes/fN.cpp -lm
; ./fN 0.67
t = 0.004345 seconds
x = 0.10116885334547539
Julia and C++ get the same result x in roughly the same execution time t. Sometimes Julia is faster, sometimes C++.
>> tic(); x=fN(0.67); t=toc();
>> t, x
t = 0.048889000000000
x = 0.101168853345475
Matlab gets the same result $x$, but ten to twenty times slower than Julia or C++.

using Pkg #load the Pkg stdlib
Pkg.activate(".") #activate the local environment
"/home/raphaelb/Projets/Perso/tests-julia/julia-intro/Project.toml"
The Hilbert matrix $A_{ij} = (i+j-1)^{-1}$ is notoriously ill-conditioned.
using LinearAlgebra #load stdlib
#install the package : not needed online, but needed on your own computer
#Pkg.add(PackageSpec(name="GenericSVD",rev="master"));
using GenericSVD #load package
m = 8
A = [1//(i+j-1) for i=1:m, j=1:m] # 8 x 8 Hilbert matrix of Rationals
8×8 Array{Rational{Int64},2}:
1//1 1//2 1//3 1//4 1//5 1//6 1//7 1//8
1//2 1//3 1//4 1//5 1//6 1//7 1//8 1//9
1//3 1//4 1//5 1//6 1//7 1//8 1//9 1//10
1//4 1//5 1//6 1//7 1//8 1//9 1//10 1//11
1//5 1//6 1//7 1//8 1//9 1//10 1//11 1//12
1//6 1//7 1//8 1//9 1//10 1//11 1//12 1//13
1//7 1//8 1//9 1//10 1//11 1//12 1//13 1//14
1//8 1//9 1//10 1//11 1//12 1//13 1//14 1//15
# 16 x 16 Hilbert is too ill-conditioned for 64-bit arithmetic
m = 16 ; A = [1/(i+j-1) for i=1:m, j=1:m] # 16 x 16 Hilbert matrix of Float64s
@show cond(A)
@show eps(Float64);
cond(A) = 7.865467778431645e17
eps(Float64) = 2.220446049250313e-16
σ = svdvals(A)
σ[end-5:end] # last 6 values
6-element Array{Float64,1}:
2.0859343371840952e-12
3.827799492915146e-14
5.394906952936614e-16
1.2294102317479617e-17
1.0657542513253642e-17
2.3648135052359386e-18
# Make 32 x 32 Hilbert matrix of 256-bit BigFloats and show a few elements
setprecision(256)
m = 32
A = [BigFloat(1//(i+j-1)) for i=1:m, j=1:m];
@show A[1,1]
@show A[2,1]
@show A[3,1]
;
A[1, 1] = 1.0
A[2, 1] = 5.0e-01
A[3, 1] = 3.333333333333333333333333333333333333333333333333333333333333333333333333333348e-01
# Compute singular values of 32 x 32 Hilbert matrix in 256-bit arithmetic
σ = svdvals!(A);
σ[end-5:end] # last 6 values
6-element Array{BigFloat,1}:
4.722253487722886629537566561063426718193936108783761901779058653423762612571911e-35
3.721728303529685301163286716629296819149422130979579196306836707182776329082812e-37
2.282607735977945944697879155526163440375905564402972710884859678322712924025411e-39
1.022045231388037386957792253667740236529498419265717624536070287487104900963478e-41
2.971550767330245214028010047204120157838584421268951012254754735920458055015361e-44
4.210099032402693745703330910061443236619344029786752208732102155291009952659503e-47
The finite field GF(p) is $\mathbb{Z}/p\mathbb{Z}$, the integers modulo p, where p is prime. Example by Andreas Noack, Julia Computing.
# Type definition: Galois fields GF(p), where p is prime modulus, T is integer type
struct GF{p,T<:Integer} <: Number
    rep::T # representative integer which holds the value of a GF(p) variable
    function GF{p,T}(x::Integer) where {p,T<:Integer}
        return new(mod(x, p))
    end
end
GF{p}(x::T) where {p,T<:Integer} = GF{p,T}(x)
# Define some basic methods for GF(p) that all Julia Numbers must have
import Base: convert, inv, one, promote_rule, show, zero
using Primes: isprime # isprime comes from the Primes package
function GF{p}(x::Integer) where {p}
    if !isprime(p)
        throw(ArgumentError("p must be prime"))
    end
    return GF{p,typeof(x)}(mod(x, p))
end
convert(::Type{GF{p,T}}, x::Integer) where {p,T} = GF{p}(x)
convert(::Type{GF{p}}, x::Integer) where {p} = GF{p}(x)
convert(::Type{GF{p,T}}, x::GF{p}) where {p,T} = GF{p,T}(x.rep)
promote_rule(::Type{GF{p,T1}}, ::Type{T2}) where {p,T1,T2<:Integer} = GF{p,promote_type(T1,T2)}
show(io::IO, x::GF) = show(io, x.rep);
# Define arithmetic operations on GF(p)
import Base: +, -, *, /,conj
for op in (:+, :-, :*) # loop over ops, defining each in terms of integer ops mod p
    @eval begin
        ($op)(x::GF{p,T}, y::GF{p,T}) where {p,T} = GF{p,T}($(op)(x.rep, y.rep))
    end
end
# Define inverse and ÷. Requires more care than +, -, * because dividing by zero throws error
function inv(x::GF{p,T}) where {p,T}
    if x == zero(x)
        throw(DivideError())
    end
    r, u, v = gcdx(x.rep, p)
    return GF{p,T}(u)
end
function conj(x::GF{p,T}) where {p,T}
    u = conj(x.rep)
    return GF{p,T}(u)
end
(/)(x::GF{p}, y::GF{p}) where {p} = x*inv(y);
# Create some GF(7) variables and do arithmetic
x = GF{7}(9) # x = 9 mod 7 = 2
y = GF{7}(12) # y = 12 mod 7 = 5
@show x
@show y
@show x + y # 2 + 5 mod 7 = 0
@show x - y # 2 - 5 mod 7 = 4
@show x * y # 2 * 5 mod 7 = 3
@show x / y # 2 ÷ 5 mod 7 = 6, because 2 = 6*5 mod 7
;
x = 2
y = 5
x + y = 0
x - y = 4
x * y = 3
x / y = 6
import Random: seed!
seed!(1234)
A = [GF{7}(rand(0:6)) for i = 1:4, j = 1:4]; # matrix of random GF(7) elems
b = [GF{7}(rand(0:6)) for i = 1:4]; # random vector b for Ax=b problem
x = A\b; # solve Ax=b over GF(p)!
A*x - b # check that x satisfies Ax=b
4-element Array{GF{7,Int64},1}:
0
0
0
0
Whoa! Built-in backslash operator on matrix of GF(p) worked!
Julia generated and compiled a GF(p)-specific version of its generic LU decomp function. All it needs is definitions of $+, -, \times, \div$.
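The same genericity was behind the earlier Rational Hilbert-matrix example: backslash on a matrix of Rationals runs the generic LU code in exact arithmetic. A quick sketch (small 3×3 case for brevity):

```julia
using LinearAlgebra

A = [1//(i+j-1) for i=1:3, j=1:3]  # 3×3 Hilbert matrix of Rationals
b = [1//1, 0//1, 0//1]
x = A \ b                          # exact rational solve via generic LU
@show A*x == b                     # true -- no rounding error at all
```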
Solve the Lorenz equations:
$$ \begin{align} \frac{dx}{dt} &= σ(y-x) \\ \frac{dy}{dt} &= x(ρ-z) - y \\ \frac{dz}{dt} &= xy - βz \end{align} $$
#Pkg.add("DifferentialEquations") # add the DifferentialEquations suite
using DifferentialEquations # first time is very slow (precompilation)
using Plots
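A minimal sketch of solving this system with DifferentialEquations.jl, assuming the classic parameter values σ=10, ρ=28, β=8/3 and the initial condition (1, 0, 0):

```julia
using DifferentialEquations, Plots

# Lorenz right-hand side, written in the in-place du form
function lorenz!(du, u, p, t)
    σ, ρ, β = p
    du[1] = σ * (u[2] - u[1])
    du[2] = u[1] * (ρ - u[3]) - u[2]
    du[3] = u[1] * u[2] - β * u[3]
end

u0 = [1.0, 0.0, 0.0]              # initial condition
p  = (10.0, 28.0, 8/3)            # σ, ρ, β
prob = ODEProblem(lorenz!, u0, (0.0, 100.0), p)
sol  = solve(prob)                # an adaptive solver is chosen automatically
plot(sol, vars=(1,2,3))           # trace out the butterfly attractor
```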